We investigate the usage of convolutional neural networks (CNNs) for the slot filling task in spoken language understanding. We propose a novel CNN architecture for sequence labeling which takes into account the previous context words with preserved order information and pays special attention to the current word with its surrounding context. Moreover, it combines the information from the past and the future words for classification. Our proposed CNN architecture outperforms even the previous best ensemble of recurrent neural network models and achieves state-of-the-art results with an F1-score of 95.61% on the ATIS benchmark dataset without using any additional linguistic knowledge and resources.
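The architecture described above can be sketched in simplified form: a context window around each word is convolved into a feature vector, and summaries of the past and future positions are concatenated with the current position's features before classification. This is a minimal illustrative sketch with randomly initialized toy weights, not the authors' implementation; all sizes, pooling choices, and function names here are assumptions.

```python
import numpy as np

# Toy sizes (assumptions, not from the paper)
rng = np.random.default_rng(0)
vocab_size, emb_dim, n_labels, win = 20, 8, 5, 3
emb = rng.normal(size=(vocab_size, emb_dim))       # word embeddings
W_conv = rng.normal(size=(win * emb_dim, 16))      # convolution filter bank
W_out = rng.normal(size=(3 * 16, n_labels))        # current + past + future features

def tag_sentence(word_ids):
    """Predict one slot label index per word (hypothetical helper)."""
    T = len(word_ids)
    pad = win // 2
    padded = [0] * pad + list(word_ids) + [0] * pad
    # Convolution: one feature vector per position from its surrounding
    # window of embeddings (order within the window is preserved by
    # concatenation), followed by a ReLU non-linearity.
    feats = np.stack([
        np.maximum(0, np.concatenate([emb[padded[t + k]] for k in range(win)]) @ W_conv)
        for t in range(T)
    ])
    labels = []
    for t in range(T):
        past = feats[:t + 1].max(axis=0)    # max-pool over past positions
        future = feats[t:].max(axis=0)      # max-pool over future positions
        # Combine current-word features with past and future context
        h = np.concatenate([feats[t], past, future])
        labels.append(int(np.argmax(h @ W_out)))
    return labels

print(tag_sentence([3, 7, 1, 12]))
```

With random weights the predicted labels are meaningless; the sketch only shows the data flow (windowed convolution, past/future pooling, joint classification) that the abstract describes.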